34 research outputs found

    Information complexity of the AND function in the two-party and multi-party settings

    In a recent breakthrough paper [M. Braverman, A. Garg, D. Pankratov, and O. Weinstein, From information to exact communication, STOC'13], Braverman et al. developed a local characterization of the zero-error information complexity in the two-party model and used it to compute the exact internal and external information complexity of the 2-bit AND function, which was then applied to determine the exact asymptotics of the randomized communication complexity of the set disjointness problem. In this article, we extend their results on the AND function to the multi-party number-in-hand model by proving that the generalization of their protocol has optimal internal and external information cost for certain distributions. Our proof has new components, and in particular it fixes some minor gaps in the proof of Braverman et al.
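    The link between the 2-bit AND function and set disjointness comes from the fact that disjointness decomposes coordinate-wise into independent AND instances. The following minimal sketch (illustrative only, not taken from the paper) makes that decomposition concrete:

```python
def and_gate(xi, yi):
    # The 2-bit AND instance whose information complexity the paper studies.
    return xi & yi

def disjointness(x, y):
    # DISJ(x, y) = 1 iff the sets with characteristic vectors x and y
    # are disjoint, i.e. no coordinate has AND(x_i, y_i) = 1.
    return 0 if any(and_gate(xi, yi) for xi, yi in zip(x, y)) else 1

assert disjointness([1, 0, 1], [0, 1, 0]) == 1  # disjoint sets
assert disjointness([1, 0, 1], [0, 0, 1]) == 0  # both contain element 3
```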

    Constructive Relationships Between Algebraic Thickness and Normality

    We study the relationship between two measures of Boolean functions: \emph{algebraic thickness} and \emph{normality}. For a function $f$, the algebraic thickness is a variant of the \emph{sparsity}, the number of nonzero coefficients in the unique GF(2) polynomial representing $f$, and the normality is the largest dimension of an affine subspace on which $f$ is constant. We show that for $0 < \epsilon < 2$, any function with algebraic thickness $n^{3-\epsilon}$ is constant on some affine subspace of dimension $\Omega(n^{\epsilon/2})$. Furthermore, we give an algorithm for finding such a subspace. We show that this guarantee is within a factor of $\Theta(\sqrt{n})$ of the best possible and, when restricted to the technique used, within a factor of $\Theta(\sqrt{\log n})$ of the best possible. We also show that a concrete function, majority, has algebraic thickness $\Omega(2^{n^{1/6}})$. Comment: Final version published in FCT'201
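    To make the sparsity measure concrete, here is a small brute-force sketch (not the paper's algorithm) that recovers the GF(2) polynomial of a function from its truth table via the fast Möbius transform and counts its nonzero coefficients; for small n, normality could similarly be found by exhaustive search over affine subspaces.

```python
def anf_coefficients(truth_table):
    # Fast Moebius transform over GF(2): converts a truth table,
    # indexed by n-bit inputs, into the coefficients of the unique
    # GF(2) polynomial (algebraic normal form) of the function.
    n = len(truth_table).bit_length() - 1
    c = list(truth_table)
    for i in range(n):
        for mask in range(1 << n):
            if mask & (1 << i):
                c[mask] ^= c[mask ^ (1 << i)]
    return c

def sparsity(truth_table):
    # Sparsity: the number of nonzero ANF coefficients (monomials).
    return sum(anf_coefficients(truth_table))

# Majority on 3 bits; truth table indexed by the integer (x2 x1 x0).
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]
print(sparsity(maj3))  # 3, since maj3 = x0*x1 + x0*x2 + x1*x2 over GF(2)
```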

    Revealed Preference Dimension via Matrix Sign Rank

    Given a dataset of consumer behaviour, the Revealed Preference Graph succinctly encodes inferred relative preferences between observed outcomes as a directed graph. Not all graphs can be constructed as revealed preference graphs when the market dimension is fixed. This paper solves the open problem of determining exactly which graphs are attainable as revealed preference graphs in $d$-dimensional markets. This is achieved via an exact characterization that closely ties the feasibility of the graph to the Matrix Sign Rank of its signed adjacency matrix. The paper also shows that when the preference relations form a partially ordered set with order-dimension $k$, the graph is attainable as a revealed preference graph in a $k$-dimensional market. Comment: Submitted to WINE '1
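    For intuition, here is a minimal sketch of the standard revealed-preference construction as it appears in the general literature (assumed here, not quoted from this paper): observation i reveals a preference for its chosen bundle over any bundle that was affordable at observation i's prices.

```python
import numpy as np

def revealed_preference_edges(prices, bundles):
    # Edge i -> j when bundle j was affordable at observation i's
    # prices (p_i . x_j <= p_i . x_i), so outcome i is revealed
    # preferred to outcome j.
    prices = np.asarray(prices, dtype=float)
    bundles = np.asarray(bundles, dtype=float)
    spent = np.einsum("ij,ij->i", prices, bundles)  # p_i . x_i
    cost = prices @ bundles.T                       # cost[i, j] = p_i . x_j
    m = len(bundles)
    return [(i, j) for i in range(m) for j in range(m)
            if i != j and cost[i, j] <= spent[i]]

# Two observations in a 2-dimensional market.
print(revealed_preference_edges(prices=[[1, 1], [1, 2]],
                                bundles=[[2, 2], [3, 0]]))  # [(0, 1)]
```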

    On Tackling the Limits of Resolution in SAT Solving

    The practical success of Boolean Satisfiability (SAT) solvers stems from the CDCL (Conflict-Driven Clause Learning) approach to SAT solving. However, from a propositional proof complexity perspective, CDCL is no more powerful than the resolution proof system, for which many hard examples exist. This paper proposes a new problem transformation, which enables reducing the decision problem for formulas in conjunctive normal form (CNF) to the problem of solving maximum satisfiability over Horn formulas. Given the new transformation, the paper proves a polynomial bound on the number of MaxSAT resolution steps for pigeonhole formulas. This result is in clear contrast with earlier results on the length of MaxSAT resolution proofs for pigeonhole formulas. The paper also establishes the same polynomial bound for modern core-guided MaxSAT solvers. Experimental results, obtained on CNF formulas known to be hard for CDCL SAT solvers, show that such formulas can be solved efficiently with modern MaxSAT solvers.
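    One way to realize such a transformation is a dual-rail-style encoding; the sketch below follows that general idea (the paper's exact encoding may differ in details). Each original variable gets a "true" rail and a "false" rail, every encoded clause contains only negative literals, and the soft clauses are positive units, so the resulting MaxSAT instance is Horn.

```python
def dual_rail_horn_maxsat(clauses, num_vars):
    # Rails as DIMACS-style integers: p_i = i ("x_i is true"),
    # n_i = num_vars + i ("x_i is false").
    p = lambda i: i
    n = lambda i: num_vars + i
    # Hard: a variable cannot be both true and false, and each original
    # clause is rewritten with  x_i -> not n_i,  -x_i -> not p_i,
    # so every hard clause has only negative literals (Horn).
    hard = [[-p(i), -n(i)] for i in range(1, num_vars + 1)]
    hard += [[-n(l) if l > 0 else -p(-l) for l in c] for c in clauses]
    # Soft: prefer giving each variable some value; positive unit
    # clauses are Horn as well. The CNF is satisfiable iff some
    # assignment satisfying the hard clauses falsifies at most
    # num_vars soft clauses (exactly one rail true per variable).
    soft = [[p(i)] for i in range(1, num_vars + 1)]
    soft += [[n(i)] for i in range(1, num_vars + 1)]
    return hard, soft

# (x1 or x2) and (not x1 or x2) and (not x2)
hard, soft = dual_rail_horn_maxsat([[1, 2], [-1, 2], [-2]], num_vars=2)
print(hard)  # [[-1, -3], [-2, -4], [-3, -4], [-1, -4], [-2]]
```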

    Memory Lower Bounds of Reductions Revisited

    In Crypto 2017, Auerbach et al. initiated the study of memory-tight reductions and proved two negative results on the memory-tightness of restricted black-box reductions from multi-challenge security to single-challenge security for signatures and an artificial hash function. In this paper, we revisit the results of Auerbach et al. and show that for a large class of reductions treating multi-challenge security, it is impossible to avoid a loss of memory-tightness without sacrificing running-time efficiency. Specifically, we show three lower bound results. First, we show a memory lower bound for natural black-box reductions from the multi-challenge unforgeability of unique signatures to any computational assumption. Then we show a lower bound for restricted reductions from multi-challenge security to single-challenge security for a wide class of cryptographic primitives with unique keys in the multi-user setting. Finally, we extend the lower bound result shown by Auerbach et al. treating a hash function to one treating any hash function with a large domain.

    On the Streaming Indistinguishability of a Random Permutation and a Random Function

    An adversary with $S$ bits of memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1, 2, \ldots, N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases. This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma, which allowed proving that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, the bound's proof assumed an unproven combinatorial conjecture. Moreover, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
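    The $Q \cdot S/N$ attack the abstract refers to is essentially a collision finder: store some stream elements and watch for a repeat, which can only occur when sampling with replacement. A minimal simulation sketch (measuring memory in stored elements rather than bits, a simplification):

```python
import random

def distinguish(stream, s):
    # Remember the first s elements; a repeat of a stored element can
    # only happen when sampling with replacement (a random function),
    # since a random permutation never repeats a value.
    memory = set()
    for x in stream:
        if x in memory:
            return "function"
        if len(memory) < s:
            memory.add(x)
    return "permutation"

N, Q, S, trials = 10**6, 10**3, 10**3, 1000
hits = 0
for _ in range(trials):
    func_stream = [random.randrange(N) for _ in range(Q)]  # with replacement
    perm_stream = random.sample(range(N), Q)               # without replacement
    hits += distinguish(func_stream, S) == "function"
    hits -= distinguish(perm_stream, S) == "function"
print("empirical advantage:", hits / trials, "vs Q*S/N =", Q * S / N)
```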

    Lower Bounds on the Time/Memory Tradeoff of Function Inversion

    We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an $s$-bit advice string on a randomly chosen function $f \colon [n] \mapsto [n]$ and using $q$ oracle queries to $f$, tries to invert a randomly chosen output $y$ of $f$, i.e., to find $x \in f^{-1}(y)$. Much progress has been made on adaptive function inversion, where the inverter is allowed to make adaptive oracle queries. Hellman [IEEE Transactions on Information Theory '80] presented an adaptive inverter that inverts a random $f$ with high probability. Fiat and Naor [SICOMP '00] proved that for any $s, q$ with $s^3 q = n^3$ (ignoring low-order terms), an $s$-advice, $q$-query variant of Hellman's algorithm inverts a constant fraction of the image points of any function. Yao [STOC '90] proved a lower bound of $sq \ge n$ for this problem. Closing the gap between the above lower and upper bounds is a long-standing open question. Very little is known for the non-adaptive variant of the question, where the inverter chooses its queries in advance. The only known upper bounds, i.e., inverters, are the trivial ones (with $s + q = n$), and the only lower bound is the above bound of Yao. In a recent work, Corrigan-Gibbs and Kogan [TCC '19] partially justified the difficulty of finding lower bounds on non-adaptive inverters, showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits, bounds that, for a strong enough choice of parameters, are notoriously hard to prove. We make progress on the above intriguing question, both for the adaptive and the non-adaptive case, proving the following lower bounds on restricted families of inverters:
    - Linear-advice (adaptive inverter): If the advice string is a linear function of $f$ (e.g., $A \times f$, for some matrix $A$, viewing $f$ as a vector in $[n]^n$), then $s + q \in \Omega(n)$. The bound generalizes to the case where the advice string of $f_1 + f_2$, i.e., the coordinate-wise addition of the truth tables of $f_1$ and $f_2$, can be computed from the descriptions of $f_1$ and $f_2$ by a low-communication protocol.
    - Affine non-adaptive decoders: If the non-adaptive inverter has an affine decoder, i.e., it outputs a linear function, determined by the advice string and the element to invert, of the query answers, then $s \in \Omega(n)$ (regardless of $q$).
    - Affine non-adaptive decision trees: If the non-adaptive inversion algorithm is a $d$-depth affine decision tree, i.e., it outputs the evaluation of a decision tree whose nodes compute a linear function of the answers to the queries, and $q < cn$ for some universal constant $c > 0$, then $s \in \Omega(n / d \log n)$.
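    For intuition, the trivial non-adaptive inverter with $s + q = n$ mentioned above can be sketched directly: store $f$ on part of the domain as advice and query the rest. A minimal sketch (memory measured in table entries rather than bits, as a simplification):

```python
import random

def make_advice(f, s):
    # Advice: a partial inverse table for f on the first s domain points.
    return {f[x]: x for x in range(s)}

def nonadaptive_invert(f_oracle, advice, queries, y):
    # Non-adaptive: the query set is fixed in advance and does not
    # depend on earlier answers.
    if y in advice:
        return advice[y]
    for x in queries:
        if f_oracle(x) == y:
            return x
    return None

n, s = 16, 6
f = [random.randrange(n) for _ in range(n)]  # random function [n] -> [n]
advice = make_advice(f, s)
queries = list(range(s, n))                  # q = n - s, hence s + q = n
y = f[random.randrange(n)]                   # a random image point
x = nonadaptive_invert(lambda z: f[z], advice, queries, y)
assert f[x] == y  # always succeeds: advice and queries cover all of [n]
```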

    The Communication Complexity of Threshold Private Set Intersection

    Threshold private set intersection enables Alice and Bob, who hold sets $A$ and $B$ of size $n$, to compute the intersection $A \cap B$ if the sets do not differ by more than some threshold parameter $t$. In this work, we investigate the communication complexity of this problem and establish the first upper and lower bounds. We show that any protocol has to have a communication complexity of $\Omega(t)$. We show that an almost matching upper bound of $\tilde{\mathcal{O}}(t)$ can be obtained via fully homomorphic encryption. We also present a computationally more efficient protocol, based on the weaker assumption of additively homomorphic encryption, with a communication complexity of $\tilde{\mathcal{O}}(t^2)$. For applications like biometric authentication, where a given fingerprint has to have a large intersection with a fingerprint from a database, our protocols may result in significant communication savings. We furthermore show how to extend all of our protocols to the multiparty setting. Prior to this work, all previous protocols had a communication complexity of $\Omega(n)$. Our protocols are the first ones whose communication complexity depends mainly on the threshold parameter $t$ and only logarithmically on the set size $n$.
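    The ideal functionality the protocols realize can be stated in a few lines; the sketch below interprets "differ by more than $t$" as the symmetric difference exceeding $t$, an assumption for illustration (the paper's exact notion may be defined differently), and of course a real protocol computes this securely rather than on plaintext sets.

```python
def threshold_psi(A, B, t):
    # Reveal the intersection only when the sets are close: here the
    # closeness test is |A symmetric-difference B| <= t.
    A, B = set(A), set(B)
    if len(A ^ B) <= t:   # A ^ B is the symmetric difference
        return A & B
    return None           # sets too far apart: reveal nothing

print(threshold_psi({1, 2, 3, 4}, {2, 3, 4, 5}, t=2))  # {2, 3, 4}
print(threshold_psi({1, 2, 3, 4}, {5, 6, 7, 8}, t=2))  # None
```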

    Fine-Grained Cryptography Revisited

    Fine-grained cryptographic primitives are secure against adversaries with bounded resources and can be computed by honest users with fewer resources than the adversaries. In this paper, we revisit the results by Degwekar, Vaikuntanathan, and Vasudevan in Crypto 2016 on fine-grained cryptography and show constructions of three fundamental fine-grained cryptographic primitives: one-way permutations, hash proof systems (which in turn imply a public-key encryption scheme secure against chosen-ciphertext attacks), and trapdoor one-way functions. All of our constructions are computable in $\mathsf{NC}^1$ and secure against (non-uniform) $\mathsf{NC}^1$ circuits under the widely believed worst-case assumption $\mathsf{NC}^1 \subsetneq \oplus\mathsf{L/poly}$.

    From Obfuscation to the Security of Fiat-Shamir for Proofs

    The Fiat-Shamir paradigm [CRYPTO '86] is a heuristic for converting three-round identification schemes into signature schemes and, more generally, for collapsing rounds in constant-round public-coin interactive protocols. This heuristic is very popular both in theory and in practice, and its security has been the focus of extensive study. In particular, this paradigm was shown to be secure in the so-called Random Oracle Model. However, in the plain model, mainly negative results were shown. In particular, this heuristic was shown to be insecure when applied to computationally sound proofs (also known as arguments). Moreover, it was recently shown that even in the restricted setting where the heuristic is applied to interactive proofs (as opposed to arguments), its soundness cannot be proven via a black-box reduction to any so-called falsifiable assumption. In this work, we give a positive result for the security of this paradigm in the plain model. Specifically, we construct a hash function for which the Fiat-Shamir paradigm is secure when applied to proofs (as opposed to arguments), assuming the existence of a sub-exponentially secure indistinguishability obfuscator, the existence of an exponentially secure input-hiding obfuscator for the class of multi-bit point functions, and the existence of a sub-exponentially secure one-way function. While the hash function we construct is far from practical, we believe that this is a first step towards instantiations that are both more efficient and provably secure. In addition, we show that this result resolves a long-standing open problem in the study of zero-knowledge proofs: it implies that there does not exist a public-coin constant-round zero-knowledge proof with negligible soundness (under the assumptions stated above).
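    The paradigm itself is simple to state in code: the verifier's random challenge is replaced by a hash of the transcript (and the message, when building a signature). A minimal toy sketch over a Schnorr-style identification scheme, with illustrative and deliberately insecure parameters (the hash function constructed in the paper is far more involved):

```python
import hashlib
import random

# Toy Schnorr-style scheme over the prime field p = 2**61 - 1.
p = 2**61 - 1   # a Mersenne prime; illustrative only, not secure
g = 3
sk = random.randrange(2, p - 1)   # prover's secret key
pk = pow(g, sk, p)                # public key g^sk mod p

def challenge(commitment, msg):
    # Fiat-Shamir: derive the "verifier's" challenge from the transcript.
    h = hashlib.sha256(f"{commitment}|{msg}".encode()).hexdigest()
    return int(h, 16) % (p - 1)

def sign(msg):
    r = random.randrange(1, p - 1)
    commitment = pow(g, r, p)        # prover's first message
    c = challenge(commitment, msg)   # hash replaces the random challenge
    z = (r + c * sk) % (p - 1)       # prover's response
    return commitment, z

def verify(msg, sig):
    commitment, z = sig
    c = challenge(commitment, msg)
    # Accept iff g^z = commitment * pk^c (mod p).
    return pow(g, z, p) == commitment * pow(pk, c, p) % p

sig = sign("hello")
assert verify("hello", sig)
assert not verify("tampered", sig)
```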